Language and Compiler Support for Out-of-Core Irregular Applications on Distributed-Memory Multiprocessors
Authors
Abstract
Current virtual memory systems on scalable computer systems typically offer poor performance for scientific applications whose working data sets do not fit in main memory. As a result, programmers who wish to solve such "out-of-core" problems efficiently typically write a separate version of the parallel program with explicit I/O operations. This task is onerous, and extremely difficult if the application includes indirect data references. A promising alternative is to develop language support and a compiler system, built on top of an advanced runtime system, that can automatically transform an appropriate in-core program to operate efficiently on out-of-core data. This paper presents that approach. Our proposals are discussed in the context of HPF and its compilation environment.
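To make concrete what writing "explicit I/O operations" by hand involves for an irregular loop, the following is a minimal C sketch of tile-at-a-time staging for a gather such as y(i) = y(i) + x(idx(i)) over disk-resident arrays. The tile size, file layout, and naive per-element reads of x are illustrative assumptions for this sketch, not code from the paper; they also show why a smarter runtime system is needed.

```c
/* Hypothetical sketch of hand-written out-of-core staging for
 *   y(i) = y(i) + x(idx(i)),  with x and y stored in binary files.
 * Tile size and layout are assumptions for illustration only. */
#include <stdio.h>
#include <stdlib.h>

#define TILE 4096                       /* elements of y per I/O phase */

void out_of_core_gather(FILE *fx, FILE *fy, const long *idx, long n)
{
    double *ytile = malloc(TILE * sizeof *ytile);
    for (long base = 0; base < n; base += TILE) {
        long len = (n - base < TILE) ? n - base : TILE;

        /* read the tile of y that this phase updates */
        fseek(fy, base * (long)sizeof(double), SEEK_SET);
        fread(ytile, sizeof(double), (size_t)len, fy);

        /* gather the referenced x elements one by one; a real runtime
         * system would coalesce these requests into large block reads */
        for (long i = 0; i < len; i++) {
            double xval;
            fseek(fx, idx[base + i] * (long)sizeof(double), SEEK_SET);
            fread(&xval, sizeof(double), 1, fx);
            ytile[i] += xval;
        }

        /* write the updated tile back to disk */
        fseek(fy, base * (long)sizeof(double), SEEK_SET);
        fwrite(ytile, sizeof(double), (size_t)len, fy);
    }
    free(ytile);
}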
Similar resources
Parallelization of Irregular Out-of-Core Applications for Distributed-Memory Systems
Large-scale irregular applications involve data arrays and other data structures that are too large to fit in main memory and hence reside on disk; such applications are called out-of-core applications. This paper presents techniques for implementing this kind of application. In particular, we present a design for a runtime system to efficiently support parallel execution of irregular out-of-core ...
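As an illustration of the kind of runtime support this entry describes, here is a hedged inspector/executor sketch in C: before a tile of the irregular loop runs, the indirection array is inspected so that each needed disk block of x is read exactly once. The block size, file layout, and function name are assumptions for this sketch, not the paper's runtime interface.

```c
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 1024                       /* doubles per disk block (assumed) */

/* Process one in-memory tile of y against a disk-resident x:
 * the inspector finds the distinct blocks of x this tile touches,
 * the fetch phase reads each of them once, and the executor runs
 * the irregular loop entirely out of the in-core buffer. */
void inspector_executor_tile(FILE *fx, double *ytile,
                             const long *idx, long len)
{
    /* inspector: collect distinct block numbers referenced by idx */
    long *blocks = malloc(len * sizeof *blocks);
    long nblocks = 0;
    for (long i = 0; i < len; i++) {
        long b = idx[i] / BLOCK, k = 0;
        while (k < nblocks && blocks[k] != b) k++;
        if (k == nblocks) blocks[nblocks++] = b;
    }

    /* fetch: one large read per needed block instead of per element */
    double *buf = malloc((size_t)nblocks * BLOCK * sizeof *buf);
    for (long k = 0; k < nblocks; k++) {
        fseek(fx, blocks[k] * BLOCK * (long)sizeof(double), SEEK_SET);
        fread(buf + k * BLOCK, sizeof(double), BLOCK, fx);
    }

    /* executor: the original irregular computation, now in core */
    for (long i = 0; i < len; i++) {
        long k = 0;
        while (blocks[k] != idx[i] / BLOCK) k++;
        ytile[i] += buf[k * BLOCK + idx[i] % BLOCK];
    }

    free(blocks);
    free(buf);
}
```

Separating inspection from execution is what lets the I/O be batched; the same split underlies communication scheduling for irregular codes on distributed-memory machines.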
Improving Compiler and Run-Time Support for Irregular Reductions Using Local Writes
Current compilers for distributed-memory multiprocessors parallelize irregular reductions either by generating calls to sophisticated run-time systems (CHAOS) or by relying on replicated buffers and the shared-memory interface supported by software DSMs (TreadMarks). We introduce LocalWrite, a new technique for parallelizing irregular reductions based on the owner-computes rule. It eliminates th...
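The following is a minimal sketch of an owner-computes irregular reduction in the spirit of LocalWrite, assuming an OpenMP shared-memory setting, a simple block ownership rule, and an edge/force data layout that are all illustrative rather than taken from the paper: every thread scans the full edge list but updates only the elements it owns, so no replicated buffers or synchronized writes are needed, at the cost of some redundant traversal.

```c
#include <omp.h>

typedef struct { int u, v; double w; } Edge;   /* assumed edge structure */

void localwrite_reduce(double *force, int nnodes,
                       const Edge *edges, int nedges)
{
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nth = omp_get_num_threads();
        int lo  = (int)((long)nnodes * tid / nth);        /* owned range  */
        int hi  = (int)((long)nnodes * (tid + 1) / nth);  /* [lo, hi)     */

        /* each thread re-inspects every edge (redundant work) but writes
         * only the endpoints that fall inside its owned block of force[] */
        for (int e = 0; e < nedges; e++) {
            if (edges[e].u >= lo && edges[e].u < hi)
                force[edges[e].u] += edges[e].w;
            if (edges[e].v >= lo && edges[e].v < hi)
                force[edges[e].v] -= edges[e].w;
        }
    }
}
```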
Efficient Compiler and Run-Time Support for Parallel Irregular Reductions
Many scientific applications are comprised of irregular reductions on large data sets. In shared-memory parallel programs, these irregular reductions are typically computed in parallel using replicated buffers, then combined using synchronization. We develop LocalWrite, a new technique which partitions irregular reductions so that each processor computes values only for locally assigned data, elim...
Efficient compiler and run-time support for parallel irregular reductions
Many scientific applications are comprised of irregular reductions on large data sets. In shared-memory parallel programs, these irregular reductions are typically computed in parallel using replicated buffers, then combined using synchronization. We develop LOCALWRITE, a new technique which partitions irregular reductions so that each processor computes values only for locally assigned data, eli...
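For contrast, here is a sketch of the replicated-buffer scheme that these two entries describe as the conventional shared-memory approach: each thread reduces into a private copy of the whole array and the copies are combined afterwards. The edge/force data structures are the same illustrative assumptions as in the previous sketch; the point of the comparison is that memory here grows with the thread count, which is what the owner-computes partitioning avoids.

```c
#include <omp.h>
#include <stdlib.h>

typedef struct { int u, v; double w; } Edge;   /* assumed edge structure */

void replicated_reduce(double *force, int nnodes,
                       const Edge *edges, int nedges)
{
    int nth = omp_get_max_threads();
    /* one private, zero-initialized copy of force[] per thread */
    double *priv = calloc((size_t)nth * nnodes, sizeof *priv);

    #pragma omp parallel
    {
        double *mine = priv + (size_t)omp_get_thread_num() * nnodes;

        #pragma omp for                      /* each edge visited once */
        for (int e = 0; e < nedges; e++) {
            mine[edges[e].u] += edges[e].w;
            mine[edges[e].v] -= edges[e].w;
        }

        #pragma omp for                      /* combine the private copies */
        for (int i = 0; i < nnodes; i++)
            for (int t = 0; t < nth; t++)
                force[i] += priv[(size_t)t * nnodes + i];
    }
    free(priv);
}
```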
Hardware Support for Data Dependence Speculation in Distributed Shared-Memory Multiprocessors Via Cache-block Reconciliation
Data dependence speculation allows a compiler to relax the constraint of data-independence to issue tasks in parallel, increasing the potential for automatic extraction of parallelism from sequential programs. This paper proposes hardware mechanisms to support a data-dependence speculative distributed shared-memory (DDSM) architecture that enable speculative parallelization of programs with irr...
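This entry concerns hardware mechanisms; purely as a software analogue, and not the DDSM design itself, the following C sketch speculatively executes a loop with statically unknown dependences against a pre-loop snapshot, detects flow-dependence violations, and falls back to sequential re-execution. The loop body, the shadow arrays, and the all-or-nothing recovery policy are assumptions made for this illustration.

```c
#include <stdlib.h>
#include <string.h>

/* Speculative execution of  a[w[i]] = a[r[i]] + b[i],  whose cross-iteration
 * dependences cannot be determined at compile time. */
void speculative_loop(double *a, int na,
                      const int *r, const int *w, const double *b, int n)
{
    double *a_snap  = malloc(na * sizeof *a_snap);   /* state before loop */
    double *a_spec  = malloc(na * sizeof *a_spec);   /* speculative state */
    int    *written = calloc(na, sizeof *written);   /* earlier writer?   */
    memcpy(a_snap, a, na * sizeof *a_snap);
    memcpy(a_spec, a, na * sizeof *a_spec);

    int violated = 0;
    for (int i = 0; i < n && !violated; i++) {
        /* speculate: read the pre-loop snapshot, as parallel tasks would */
        a_spec[w[i]] = a_snap[r[i]] + b[i];
        /* violation: a logically earlier iteration wrote what we just read */
        if (written[r[i]])
            violated = 1;
        written[w[i]] = 1;
    }

    if (!violated) {
        memcpy(a, a_spec, na * sizeof *a);           /* commit speculation */
    } else {
        for (int i = 0; i < n; i++)                  /* squash and re-run  */
            a[w[i]] = a[r[i]] + b[i];                /* sequentially       */
    }

    free(a_snap); free(a_spec); free(written);
}
```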
Journal:
Volume / Issue:
Pages: -
Year of publication: 1998